Efficient robust zero-watermarking algorithm for 3D medical images based on ray-casting sampling and quaternion orthogonal moment
Jian GAO, Zhi LI, Bin FAN, Chuanxian JIANG
Journal of Computer Applications    2023, 43 (4): 1191-1197.   DOI: 10.11772/j.issn.1001-9081.2021050746

Aiming at the copyright protection problem of 3D medical images and the growing watermark storage demand caused by the increasing number of images to be protected, a robust zero-watermarking algorithm based on ray-casting sampling and the polar complex exponential moment was proposed. Firstly, a sampling algorithm based on ray-casting was proposed to sample the features of a 3D medical image composed of multiple sequential 2D medical images and to describe these features in 2D image space. Secondly, a robust zero-watermarking algorithm for 3D medical images was proposed: three 2D feature images of the coronal plane, sagittal plane and cross section of the 3D medical image were obtained by ray-casting sampling, and the polar complex exponential transform was applied to the three feature images to obtain the quaternion orthogonal moment. Finally, the zero-watermark information was constructed by using the quaternion orthogonal moment and Logistic chaotic encryption. Simulation results show that the proposed algorithm maintains a bit correctness rate of zero-watermark extraction above 0.9200 under various common image processing attacks and geometric attacks; the watermark storage capacity of the proposed algorithm grows with the volume of 3D medical image data, and is at least 93.75% higher than that of the compared 2D medical image zero-watermarking algorithms.
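The final construction step — combining a binarized feature with a Logistic chaotic key stream to form the zero-watermark — can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function names, the median-based binarization, and the parameter values (x0, mu) are all assumptions:

```python
import numpy as np

def logistic_sequence(x0, mu, n):
    """Logistic-map chaotic sequence: x_{k+1} = mu * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def build_zero_watermark(feature_magnitudes, watermark_bits, x0=0.7, mu=3.99):
    """XOR a binarized feature vector with a chaotic key stream and the
    copyright watermark to form the zero-watermark; nothing is embedded
    in the image itself (hypothetical parameters x0, mu serve as the key)."""
    feat_bits = (feature_magnitudes > np.median(feature_magnitudes)).astype(np.uint8)
    key_bits = (logistic_sequence(x0, mu, feat_bits.size) > 0.5).astype(np.uint8)
    return feat_bits ^ key_bits ^ watermark_bits

def extract_watermark(zero_watermark, feature_magnitudes, x0=0.7, mu=3.99):
    """Recover the watermark from a (possibly attacked) image's features."""
    feat_bits = (feature_magnitudes > np.median(feature_magnitudes)).astype(np.uint8)
    key_bits = (logistic_sequence(x0, mu, feat_bits.size) > 0.5).astype(np.uint8)
    return zero_watermark ^ feat_bits ^ key_bits
```

Because the watermark is never embedded, extraction succeeds as long as the attacked image yields nearly the same feature bits, which is the source of the robustness claimed above.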

Attribute reduction algorithm based on cluster granulation and divergence among clusters
Yan LI, Bin FAN, Jie GUO
Journal of Computer Applications    2022, 42 (9): 2701-2712.   DOI: 10.11772/j.issn.1001-9081.2021081371

Attribute reduction is a hot research topic in rough set theory. Most attribute reduction algorithms for continuous data are based on dominance relations or neighborhood relations. However, the attributes of continuous datasets do not necessarily have dominance relations. Attribute reduction algorithms based on neighborhood relations can adjust the granulation degree through the neighborhood radius, but it is difficult to unify the radii because attributes differ in dimension and the radius parameter takes continuous values, which makes the whole parameter granulation process computationally expensive. To solve this problem, a multi-granularity attribute reduction strategy based on cluster granulation was proposed. Firstly, similar samples were grouped by a clustering method, and the concepts of approximate set, relative positive region and positive region reduction based on clustering were proposed. Secondly, according to Jensen-Shannon (JS) divergence theory, the difference in the data distribution of each attribute among clusters was measured, and representative features were selected to distinguish different clusters. Finally, an attribute reduction algorithm was designed using a discernibility matrix. The proposed algorithm does not require the attributes to have ordered relations. Unlike the neighborhood radius, the clustering parameter is discrete, so the dataset can be divided at different granulation degrees by adjusting this parameter. Experimental results on UCI and Kent Ridge datasets show that the proposed algorithm can directly deal with continuous data, and that it removes redundant features from the datasets while maintaining or even improving the classification accuracy through discrete adjustment of the parameter within a small range.
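The JS-divergence scoring of an attribute across clusters can be sketched as follows. This is a minimal sketch of the standard Jensen-Shannon divergence applied to per-cluster histograms, with hypothetical function names and bin count; it is not the paper's code:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in bits, bounded by 1) between two
    discrete distributions given as non-negative count vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def attribute_divergence(values, labels, bins=10):
    """Average pairwise JS divergence of one attribute's histogram across
    clusters; higher values mean the attribute separates the clusters better."""
    edges = np.histogram_bin_edges(values, bins=bins)
    hists = [np.histogram(values[labels == c], bins=edges)[0]
             for c in np.unique(labels)]
    scores = [js_divergence(hists[i], hists[j])
              for i in range(len(hists)) for j in range(i + 1, len(hists))]
    return float(np.mean(scores))
```

An attribute whose distribution is identical in every cluster scores near 0, while one whose per-cluster distributions barely overlap scores near 1, which is the sense in which representative features "distinguish different clusters" above.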

Feature construction and preliminary analysis of uncertainty for meta-learning
Yan LI, Jie GUO, Bin FAN
Journal of Computer Applications    2022, 42 (2): 343-348.   DOI: 10.11772/j.issn.1001-9081.2021071198

Meta-learning applies machine learning methods (meta-algorithms) to seek the mapping between the features of a problem (meta-features) and the relative performance measures of algorithms, thereby forming the process of learning meta-knowledge; how to construct and extract meta-features is therefore an important research topic. Concerning the problem that most meta-features used in existing related research are statistical features of the data, uncertainty modeling was proposed and the impact of uncertainty on the learning system was studied. Based on the inconsistency of data, the complexity of the boundary, the uncertainty of model output, linear separability, the degree of attribute overlap, and the uncertainty of the feature space, six kinds of uncertainty meta-features were established for data or models, measuring the uncertainty of the learning problem itself from different perspectives, with specific definitions given. The correlations between these meta-features were analyzed on artificial datasets and real datasets of a large number of classification problems, and multiple classification algorithms such as K-Nearest Neighbor (KNN) were used for a preliminary analysis of the correlation between the meta-features and test accuracy. Results show that the average degree of correlation is about 0.8, indicating that these meta-features have a significant impact on learning performance.
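One simple uncertainty meta-feature of the boundary-complexity kind can be sketched as the fraction of samples whose nearest neighbours disagree with their own label. This is an illustrative construction under our own assumptions (brute-force distances, majority vote), not one of the paper's six definitions:

```python
import numpy as np

def knn_disagreement(X, y, k=5):
    """Boundary-complexity meta-feature: fraction of samples whose k nearest
    neighbours (excluding the sample itself) have a different majority label.
    Higher values indicate a more uncertain, harder classification problem."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]     # indices of the k nearest neighbours
    disagree = 0
    for i in range(len(y)):
        labels, counts = np.unique(y[nn[i]], return_counts=True)
        if labels[np.argmax(counts)] != y[i]:
            disagree += 1
    return disagree / len(y)
```

On two well-separated clusters this measure is 0; as the classes interleave it rises toward 1, so it can serve as one column of a meta-feature table to correlate with test accuracy.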

Deep robust watermarking algorithm based on multiscale knowledge learning
Bin FAN, Zhi LI, Jian GAO
Journal of Computer Applications    2022, 42 (10): 3102-3110.   DOI: 10.11772/j.issn.1001-9081.2021050737

Aiming at the problem that existing watermarking algorithms based on deep learning cannot effectively protect the copyright of high-dimensional medical images, a medical image watermarking algorithm based on multiscale knowledge learning was proposed for the copyright protection of diffusion-weighted images. First, a watermark embedding network based on multiscale knowledge learning was proposed, in which the semantic, texture, edge and frequency-domain information of the diffusion-weighted image was extracted by a fine-tuned pre-trained network as multiscale knowledge features. Then, the multiscale knowledge features were combined to reconstruct the diffusion-weighted image, and a watermark was redundantly embedded during this process to obtain a watermarked diffusion-weighted image visually highly similar to the original. Finally, a watermark extraction network based on pyramid feature learning was proposed to improve the robustness of the algorithm by learning the distribution correlation of the watermark signal across different scales of context in the watermarked diffusion-weighted image. Experimental results show that the average Peak Signal-to-Noise Ratio (PSNR) of the watermarked images reconstructed by the proposed algorithm reaches 57.82 dB. Since diffusion-weighted images must satisfy certain diffusivity properties when converted to diffusion tensor images, it is notable that only 8 pixel points in images produced by the proposed algorithm have a principal-axis deflection angle greater than 5°, and none of them lies in the region of interest; moreover, the Fractional Anisotropy (FA) and Mean Diffusivity (MD) errors of the generated image are both close to 0, which fully meets the requirements of clinical diagnosis. At the same time, under common attacks such as cropping with strength less than 0.7 and rotation with angle less than 15°, the proposed algorithm achieves a watermark accuracy of more than 95% and can effectively protect the copyright information of diffusion-weighted images.
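The two headline metrics above — PSNR of the reconstruction and watermark bit accuracy under attack — are standard quantities and can be sketched as follows (illustrative Python; function names are our own, not the paper's):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between an original image and its
    watermarked reconstruction; higher means less visible distortion."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def bit_accuracy(w_true, w_extracted):
    """Fraction of watermark bits recovered correctly after an attack."""
    w_true = np.asarray(w_true).ravel()
    w_extracted = np.asarray(w_extracted).ravel()
    return float(np.mean(w_true == w_extracted))
```

For 8-bit images, a PSNR around 57.82 dB as reported corresponds to a mean squared error well below one grey level, which is why the watermarked image is visually indistinguishable from the original.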

Fault detection approach for MPSoC by redundancy core
TANG Liu, HUANG Zhangqin, HOU Yibin, FANG Fengcai, ZHANG Huibing
Journal of Computer Applications    2014, 34 (1): 41-45.   DOI: 10.11772/j.issn.1001-9081.2014.01.0041
To achieve a better trade-off between fault-tolerance capability and fault-tolerance overhead in processor reliability research, a fault detection approach for Multi-Processor System-on-Chip (MPSoC) was proposed, which placed the calculation and comparison parts of the detecting code on a redundancy core and thereby achieved MPSoC failure detection. The technique required no additional hardware modification, and shortened the design cycle while reducing performance and memory overheads. The verification experiment was implemented on an MPSoC by fault injection and by running multiple benchmark programs. Compared with several previous fault detection methods in terms of detection capability, area, memory and performance overhead, the experimental results show that the approach is effective and achieves a better trade-off between performance and overhead.
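The core idea — re-executing the detecting computation on a spare core and comparing the results — can be sketched in software as follows. This is only a thread-based analogy of redundant execution, not the paper's on-chip mechanism; all names are hypothetical:

```python
import threading
import queue

def detect_fault(task, inputs, runs=2):
    """Run the same task on a 'main' and a 'redundancy' worker and flag a
    fault when their results disagree (a sketch of re-execution checking)."""
    results = queue.Queue()
    workers = [threading.Thread(target=lambda: results.put(task(*inputs)))
               for _ in range(runs)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    outs = [results.get() for _ in range(runs)]
    fault = any(o != outs[0] for o in outs[1:])
    return outs[0], fault
```

The appeal of the approach described above is that the comparison logic lives entirely in software on the redundancy core, so no extra checker hardware is needed.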
Multi-resolution simplification algorithm for point cloud
YANG Bin, FAN Yuan-yuan, WANG Ji-dong
Journal of Computer Applications    2011, 31 (10): 2717-2720.   DOI: 10.3724/SP.J.1087.2011.02717
To efficiently simplify a point cloud at multiple resolutions, firstly, uniform grids were used to represent the spatial topology of the point cloud and to compute the k-nearest neighbors of each data point. Then the normal vector of each data point was estimated by constructing a covariance matrix, and the normal vectors were oriented toward the outside of the point cloud. Finally, a formulation for measuring the importance of a data point was derived according to the effect of this point on the eigenvalue spectrum of the Laplace-Beltrami operator, associated with the point's k-nearest neighbors and normal vectors; multi-resolution simplification of the point cloud was then realized by changing the value of a control factor. Experimental results show that this algorithm has a high simplification rate, fast speed and strong stability, and preserves the fine detail of the point cloud.
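The covariance-based normal estimation step can be sketched as follows: for each point, the normal is taken as the eigenvector of the local covariance matrix with the smallest eigenvalue (PCA over the k-neighbourhood). This is a minimal sketch with brute-force neighbour search instead of the paper's uniform grids, and without the outward orientation step:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals as the smallest-eigenvalue eigenvector of
    the covariance matrix of each point's k nearest neighbours."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]     # k nearest neighbours per point
    normals = np.empty_like(pts)
    for i, idx in enumerate(nn):
        nbr = pts[idx]
        cov = np.cov((nbr - nbr.mean(axis=0)).T)   # 3x3 local covariance
        w, v = np.linalg.eigh(cov)                 # eigenvalues ascending
        normals[i] = v[:, 0]                       # least-variance direction
    return normals
```

The uniform grid in the paper replaces the brute-force distance matrix here, reducing the neighbour search from quadratic to near-linear cost for large clouds.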